Time Efficient Approach To Offline Hand Written Character Recognition Using Associative Memory Net
In this paper, an efficient offline handwritten character recognition
algorithm is proposed based on an Associative Memory Net (AMN). The AMN used in
this work is auto-associative. The implementation is carried out entirely in
the 'C' language. To make the system perform at its best with minimal
computation time, a parallel algorithm is also developed using the OpenMP API.
The characters are English alphabets (small (26), capital (26)), collected from
the system (52) and from different persons (52). The characters collected from
the system are used to train the AMN, and the characters collected from
different persons are used to test the recognition ability of the net. Detailed
analysis showed that the network recognizes handwritten characters with a
recognition rate of 72.20% in the average case and 88.5% in the best case. The
developed network takes 3.57 sec (average) in the serial implementation and
1.16 sec (average) in the parallel implementation using OpenMP.
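The abstract does not give the net's internals; as an illustration only, here is a minimal NumPy sketch of an auto-associative memory in the Hopfield style, storing bipolar patterns by Hebbian outer-product learning and recalling a stored pattern from a corrupted input. The 4-pixel "character" and one-step recall are simplifying assumptions, not the paper's C implementation.

```python
import numpy as np

def train_amn(patterns):
    """Store bipolar (+1/-1) patterns via Hebbian outer-product learning."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, x):
    """One synchronous recall step: sign of the weighted input."""
    y = np.sign(W @ x)
    y[y == 0] = 1
    return y

# Store one 4-pixel "character" and recall it from a noisy copy
stored = np.array([[1, -1, 1, -1]])
W = train_amn(stored)
noisy = np.array([1, -1, 1, 1])  # last pixel flipped
print(recall(W, noisy))  # → [ 1. -1.  1. -1.], the stored pattern
```

Because recall is a fixed matrix-vector product per step, the inner loop parallelizes naturally, which is the kind of structure an OpenMP `parallel for` exploits.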
English Character Recognition using Artificial Neural Network
This work focuses on the development of an offline handwritten English
character recognition algorithm based on an Artificial Neural Network (ANN).
The ANN implemented in this work has a single output neuron, which indicates
whether the tested character belongs to a particular cluster or not. The
implementation is carried out entirely in the 'C' language. Ten sets of English
alphabets (small: 26, capital: 26) were used to train the ANN, and 5 sets were
used to test the network. The characters were collected from different persons
over a duration of about 25 days. The algorithm was tested with 5 capital-letter
and 5 small-letter sets. The results showed that the algorithm recognized
English alphabet patterns with a maximum accuracy of 92.59% and a False
Rejection Rate (FRR) of 0%.
Comment: appeared in Proceedings of the National Conference on Artificial
Intelligence, Robotics and Embedded Systems (AIRES-2012), Andhra University,
Vishakhapatnam, India (29-30 June, 2012), pp. 7-
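A single output neuron deciding cluster membership is essentially a binary linear classifier. As a hedged sketch only (the paper's network details are not given), a perceptron with one output neuron can be trained to answer "belongs to this cluster or not" on toy binary pixel vectors:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50):
    """One output neuron: 1 = belongs to the cluster, 0 = does not."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi   # classic perceptron update
            b += lr * (target - pred)
    return w, b

def predict(w, b, x):
    return 1 if x @ w + b > 0 else 0

# Toy 4-pixel patterns; the "cluster" is patterns with the first pixel on
X = np.array([[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
y = np.array([1, 1, 0, 0])
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # → [1, 1, 0, 0]
```

A real character recognizer would use one such membership network per character cluster and far larger pixel vectors.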
Parallel Algorithm for Longest Common Subsequence in a String
In pattern recognition and matching, finding a longest common subsequence
(LCS) plays an important role. In this paper, we propose an algorithm based on
parallel computation, using the OpenMP API as middleware to distribute data
across processors. We tested our algorithm on a system with four processors and
2 GB of physical memory. The best result showed that the parallel algorithm
improves performance (speed of computation) by a factor of 3.22.
Comment: appeared in: Proceedings of the National Conference on Artificial
Intelligence, Robotics and Embedded Systems (AIRES) - 2012, Andhra
University, Visakhapatnam (29-30 June, 2012), pp. 66-6
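For reference, the serial dynamic-programming formulation being parallelized looks like this (a standard LCS sketch, not the paper's OpenMP code; the comment notes the dependency structure a parallel version can exploit):

```python
def lcs_length(a, b):
    """Classic dynamic-programming LCS table. Cell (i, j) depends only on
    (i-1, j), (i, j-1) and (i-1, j-1), so all cells on the same anti-diagonal
    are independent -- this is the structure an OpenMP-style parallel version
    can exploit, computing each anti-diagonal's cells concurrently."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # → 4 (the subsequence "GTAB")
```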
Non-Correlated Character Recognition using Artificial Neural Network
This paper investigates a method for handwritten English character recognition
using an Artificial Neural Network (ANN). The work is done in an offline
environment for non-correlated characters, i.e., characters that do not possess
any linear relationships among them. We test whether a particular character
belongs to a given cluster or not. The implementation is carried out in the
Matlab environment and successfully tested. Fifty-two sets of English alphabets
are used to train the ANN and test the network. The algorithms are tested with
26 capital letters and 26 small letters. The testing results showed that the
proposed ANN-based algorithm achieved a maximum recognition rate of 85%.
Comment: appeared in: Proceedings of the National Conference on Dynamics and
Prospects of Data Mining: Theory and Practices (DPDM)-2012; September 30,
2012, India; Publisher: OITS-BLS, Balasore Chapter; Proceeding ISBN:
987-93-81361-31-6, pp. 79-8
A Novel Approach for Intelligent Robot Path Planning
Robot path planning is one of the challenging fields in robotics research.
In this paper, we propose a novel algorithm to find a path between the starting
and ending positions for an intelligent system, where an intelligent system is
considered to be a device/robot with an antenna connected to a sensor-detector
system. The proposed algorithm is based on the concept of neural network
training; the neural network considered is adaptive to the knowledge bases.
However, implementation of this algorithm is somewhat expensive due to the
hardware it requires. Detailed analysis shows that the path produced by this
algorithm is efficient.
Comment: appeared in: Proceedings of the National Conference on Artificial
Intelligence, Robotics and Embedded Systems (AIRES) - 2012, Andhra
University, Visakhapatnam (29-30 June, 2012), pp. 388-39
A study on the use of Boundary Equilibrium GAN for Approximate Frontalization of Unconstrained Faces to aid in Surveillance
Face frontalization is the process of synthesizing frontal-facing views of
faces from their angled poses. We implement a generative adversarial network
(GAN) with spherical linear interpolation (Slerp) for frontalization of
unconstrained facial images. Our particular focus is the generation of
approximate frontal faces from side-posed images captured by surveillance
cameras. Specifically, the present work is a comprehensive study of the
implementation of an auto-encoder-based Boundary Equilibrium GAN (BEGAN) that
generates frontal faces by interpolating between a side-view face and its
mirrored view. To increase the quality of the interpolated output, we implement
a BEGAN with Slerp. This approach produces promising output along with faster
and more stable training for the model. The BEGAN model additionally has a
balanced generator-discriminator combination, which prevents mode collapse,
along with a global convergence measure. It is expected that such an
approximate face generation model could replace the face composites used in
surveillance and crime detection.
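Slerp itself is a small, well-defined operation: it interpolates along the great circle between two vectors rather than along the straight line, which tends to keep interpolated GAN latent codes on the distribution's shell. A minimal sketch (the 2-D latent codes here are illustrative; the paper interpolates between a face's latent code and its mirrored view's):

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical linear interpolation between two latent vectors, t in [0, 1]."""
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(v0n @ v1n, -1.0, 1.0)
    omega = np.arccos(dot)            # angle between the two vectors
    if np.isclose(omega, 0.0):
        return (1 - t) * v0 + t * v1  # nearly parallel: fall back to lerp
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

# Midpoint between a latent code and its mirrored counterpart
z = np.array([1.0, 0.0])
z_mirror = np.array([0.0, 1.0])
print(slerp(z, z_mirror, 0.5))  # → [0.70710678 0.70710678]
```

Note that the midpoint keeps unit norm, whereas plain linear interpolation of these two codes would have norm ~0.707.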
Incorporating Symbolic Domain Knowledge into Graph Neural Networks
Our interest is in scientific problems with the following characteristics:
(1) Data are naturally represented as graphs; (2) The amount of data available
is typically small; and (3) There is significant domain-knowledge, usually
expressed in some symbolic form. These kinds of problems have been addressed
effectively in the past by Inductive Logic Programming (ILP), by virtue of two
important characteristics: (a) The use of a representation language that easily
captures the relations encoded in graph-structured data, and (b) The inclusion
of prior information encoded as domain-specific relations, which can alleviate
problems of data scarcity and be used to construct new relations. Recent
advances have seen the emergence of deep neural networks specifically developed
for graph-structured data (graph-based neural networks, or GNNs). While GNNs
have been shown to handle graph-structured data, less has been done to
investigate the inclusion of domain-knowledge. Here we investigate this aspect
of GNNs empirically by employing an operation we term "vertex-enrichment" and
denote the corresponding GNNs "VEGNNs". Using over 70 real-world datasets
and substantial amounts of symbolic domain-knowledge, we examine the result of
vertex-enrichment across 5 different variants of GNNs. Our results provide
support for the following: (a) Inclusion of domain-knowledge by
vertex-enrichment can significantly improve the performance of a GNN; that is,
the performance of VEGNNs is significantly better than that of GNNs across all
GNN variants; (b) The inclusion of domain-specific relations constructed using
ILP improves the performance of VEGNNs, across all GNN variants. Taken
together, the results provide evidence that it is possible to incorporate
symbolic domain knowledge into a GNN, and that ILP can play an important role
in providing high-level relationships that are not easily discovered by a GNN.
Comment: Accepted in Machine Learning Journal (MLJ
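One simplified reading of vertex-enrichment is: for each vertex, evaluate the symbolic domain relations that hold at that vertex and append the results as extra binary features before the GNN sees the graph. A hedged sketch under that reading (the relation predicates here are illustrative, not the paper's ILP-derived ones):

```python
import numpy as np

def vertex_enrich(node_features, domain_relations):
    """Append one binary indicator per symbolic domain relation to every
    vertex's feature vector."""
    indicators = np.array([
        [1.0 if rel(v) else 0.0 for rel in domain_relations]
        for v in range(node_features.shape[0])
    ])
    return np.hstack([node_features, indicators])

# Toy graph: 3 vertices with 2-dim features; two hypothetical domain predicates
X = np.array([[0.5, 1.0], [0.0, 0.2], [1.0, 0.0]])
adjacency = {0: [1], 1: [0, 2], 2: [1]}
is_hub = lambda v: len(adjacency[v]) > 1    # "hub" relation
is_leaf = lambda v: len(adjacency[v]) == 1  # "leaf" relation
X_enriched = vertex_enrich(X, [is_hub, is_leaf])
print(X_enriched.shape)  # → (3, 4): 2 original features + 2 relation indicators
```

Any GNN variant can then consume the enriched feature matrix unchanged, which is why the operation composes with all five variants tested.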
Incorporating Domain Knowledge into Deep Neural Networks
We present a survey of ways in which domain-knowledge has been included when
constructing models with neural networks. The inclusion of domain-knowledge is
of special interest not just for constructing scientific assistants, but also
for many other areas that involve understanding data through human-machine
collaboration. In many such instances, machine-based model construction may
benefit significantly from being provided with human knowledge of the domain
encoded in a sufficiently precise form. This paper examines two broad
approaches to encoding such knowledge (as logical constraints and as numerical
constraints) and describes techniques and results obtained in several
sub-categories under each of these approaches.
Comment: Submitted to IJCAI-2021 Survey Track (6+2 pages
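A common instance of the numerical-constraint approach is to add a penalty term to the training loss that grows when a prediction violates known domain facts. As an illustration only (the constraint and weighting are assumed, not drawn from the survey), with a domain rule that the predicted quantity must be non-negative:

```python
import numpy as np

def constrained_loss(pred, target, lam=10.0):
    """Data loss plus a penalty for violating a domain constraint.
    Illustrative constraint: predictions must be non-negative
    (e.g. a physical quantity known from the domain to be >= 0)."""
    data_loss = np.mean((pred - target) ** 2)         # ordinary MSE
    violation = np.mean(np.maximum(0.0, -pred) ** 2)  # penalize pred < 0
    return data_loss + lam * violation

target = np.array([1.0, 2.0])
print(constrained_loss(np.array([1.0, 2.0]), target))   # → 0.0
print(constrained_loss(np.array([-1.0, 2.0]), target))  # → 7.0
```

Minimizing such a loss steers the network toward predictions consistent with the domain rule without changing the network architecture itself; the logical-constraint approaches discussed in the survey instead act on the model's structure or its symbolic inputs.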
Performance evaluation of deep neural networks for forecasting time-series with multiple structural breaks and high volatility
The problem of automatic and accurate forecasting of time-series data has
always been an interesting challenge for the machine learning and forecasting
community. A majority of real-world time-series problems have non-stationary
characteristics that make understanding trend and seasonality difficult. Our
interest in this paper is to study the applicability of popular deep neural
networks (DNNs) as function approximators for non-stationary time-series
forecasting (TSF). We evaluate the following DNN models: Multi-Layer Perceptron
(MLP), Convolutional Neural Network (CNN), RNN with Long Short-Term Memory
(LSTM-RNN), and RNN with Gated Recurrent Units (GRU-RNN). These DNN methods
have been evaluated on data from 10 popular Indian financial stocks. Further,
the performance evaluation of these DNNs has been carried out over multiple
independent runs for two forecasting settings: (1) single-step forecasting and
(2) multi-step forecasting. The DNN methods show convincing performance for
single-step forecasting (one-day-ahead forecasts). For multi-step forecasting
(multiple-days-ahead forecasts), we evaluated the methods for different
forecast periods; their performance demonstrates that long forecast periods
have an adverse effect on accuracy.
Comment: Preprint (18 pages
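The single-step versus multi-step distinction comes down to how the series is windowed into training pairs. A minimal sketch (lookback length and the stand-in price series are assumptions for illustration):

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Slice a series into (input window, forecast target) pairs.
    horizon=1 gives single-step forecasting; horizon>1 gives multi-step."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i : i + lookback])
        y.append(series[i + lookback : i + lookback + horizon])
    return np.array(X), np.array(y)

prices = np.arange(10.0)  # stand-in for a daily closing-price series
X1, y1 = make_windows(prices, lookback=5, horizon=1)  # one-day-ahead
X3, y3 = make_windows(prices, lookback=5, horizon=3)  # three-days-ahead
print(X1.shape, y1.shape)  # → (5, 5) (5, 1)
print(X3.shape, y3.shape)  # → (3, 5) (3, 3)
```

Longer horizons both shrink the number of usable training pairs and force the model to predict further from its last observed value, which is consistent with the adverse effect of long forecast periods reported above.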
LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting
In this article, we present our methodologies for SemEval-2021 Task 4:
Reading Comprehension of Abstract Meaning. Given a fill-in-the-blank-type
question and a corresponding context, the task is to predict the most suitable
word from a list of 5 options. There are three sub-tasks within this task:
Imperceptibility (subtask-I), Non-Specificity (subtask-II), and Intersection
(subtask-III). We use encoders of transformer-based models pre-trained on the
masked language modelling (MLM) task to build our fill-in-the-blank (FitB)
models. To model imperceptibility, we define certain linguistic features, and
to model non-specificity, we leverage information from the hypernyms and
hyponyms provided by a lexical database; for non-specificity, we also try out
augmentation and other statistical techniques. We additionally propose two
variants, Chunk Voting and Max Context, to handle the input-length restrictions
of BERT-style models. Finally, we perform a thorough ablation study and use
Integrated Gradients to explain our predictions on a few samples. Our best
submissions achieve accuracies of 75.31% and 77.84% on the test sets for
subtask-I and subtask-II, respectively. For subtask-III, we achieve accuracies
of 65.64% and 62.27%.
Comment: 10 pages, 4 figures, SemEval-2021 Workshop, ACL-IJCNLP 202
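The Chunk Voting idea can be sketched independently of any particular model: split an over-long context into overlapping chunks, pick the best option per chunk, and take the majority vote. The scorer below is a toy stand-in (the real system would use a masked-LM probability), and the chunk/stride sizes are illustrative assumptions:

```python
from collections import Counter

def chunk_vote(context_tokens, options, score_fn, chunk_len=128, stride=64):
    """Simplified "Chunk Voting": split an over-long context into overlapping
    chunks, let a scorer pick the best option per chunk, then majority-vote.
    score_fn(chunk, option) stands in for the masked-LM score of the option."""
    votes = []
    for start in range(0, max(1, len(context_tokens) - chunk_len + 1), stride):
        chunk = context_tokens[start : start + chunk_len]
        votes.append(max(options, key=lambda o: score_fn(chunk, o)))
    return Counter(votes).most_common(1)[0][0]

# Toy scorer: an option scores by how often it appears in the chunk
tokens = (["the", "answer", "is", "apple"] * 40) + ["banana"] * 10
score = lambda chunk, option: chunk.count(option)
print(chunk_vote(tokens, ["apple", "banana"], score,
                 chunk_len=64, stride=32))  # → "apple"
```

The Max Context variant described in the paper would instead keep the single chunk giving the blank the most surrounding context, rather than voting across chunks.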